I decided to generate striped images rather than sketch them. The dots in the Fourier space are closer to the DC component for the 4-pixel-wide stripes than for the 2-pixel-wide stripes. The horizontal spatial frequency of the 4-pixel-wide image is lower because the stripes (effectively the wavelength) are wider.
These should appear as squares in both images, but there is a wrap-around artifact introduced by np.fft.fftshift.
The components of the spectrum are confined to the horizontal axis because this is the axis along which the colors alternate in the original image, i.e. where there is a nonzero spatial frequency. The image is constant in the vertical direction, along the major axis of the stripes, so only the DC component appears along that axis in the Fourier space.
The one-pixel-wide stripe image contains only the DC term and two additional dots at the very edges of the horizontal axis. This is because alternating single pixels represent the highest spatial frequency expressible in the discrete digital domain.
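Since the stripe images were generated rather than sketched, the peak positions described above can be checked numerically. The sketch below (the helper names are mine) builds vertical stripe images and reports how far the strongest non-DC spectral peak sits from the center; wider stripes give a smaller offset:

```python
import numpy as np

def stripe_image(size, stripe_width):
    """Vertical black/white stripes: columns alternate every stripe_width pixels."""
    cols = (np.arange(size) // stripe_width) % 2  # 0,0,..,1,1,.. pattern
    return np.tile(cols, (size, 1)).astype(float)

def peak_offset(img):
    """Distance (in bins) from the DC component to the strongest non-DC peak."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    center = img.shape[1] // 2
    row = spec[img.shape[0] // 2].copy()  # all energy lies on the horizontal axis
    row[center] = 0.0                     # suppress the DC term
    return abs(int(np.argmax(row)) - center)

print(peak_offset(stripe_image(64, 4)))  # 4-pixel stripes: peak close to DC
print(peak_offset(stripe_image(64, 2)))  # 2-pixel stripes: peak farther out
```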
The 0th-order Fourier series coefficient is:
$ C_0 = \frac {1}{T} \int_0^T x(\lambda)e^0 d\lambda = \frac {1}{T} \int_0^T x(\lambda) d\lambda $
This means the 0th component is simply the average of the image pixels.
The DC components of the images will ideally be the same. This is because the DC component corresponds to the mean pixel value of the image. Since the numbers of black and white pixels are equal in all the images, the average values will be the same.
The only way these would differ is if there were an odd number of stripes, leaving one extra black or white bar and shifting the average pixel value.
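A quick numerical check of this claim, assuming 64x64 versions of the stripe images (the `dc_mean` helper is mine): the DC term of `np.fft.fft2` is the sum of all pixels, so dividing by the pixel count recovers the image mean, which is 0.5 for any image with equal black and white counts.

```python
import numpy as np

def dc_mean(img):
    # F(0, 0) of the 2-D DFT is the pixel sum; divide by the count to get the mean.
    return np.fft.fft2(img)[0, 0].real / img.size

# 64x64 stripe images with 4-pixel and 2-pixel stripes (equal black/white counts).
img4 = np.tile(((np.arange(64) // 4) % 2).astype(float), (64, 1))
img2 = np.tile(((np.arange(64) // 2) % 2).astype(float), (64, 1))
print(dc_mean(img4), dc_mean(img2))  # both 0.5
```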
To avoid aliasing, we must sample at or above the Nyquist rate, which is twice the highest spatial frequency in the image.
$ \frac {250 \text{ stripes}}{1 \text{ horizontal line}} \times \frac {1 \text{ period}}{2 \text{ stripes}} \times \frac {2 \text{ elements}}{1 \text{ period}} = \frac {250 \text{ elements}}{1 \text{ horizontal line}}$
Sampling at the Nyquist rate, we need 250 optical elements per horizontal line, with one placed every 2 cm.
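The unit arithmetic above can be sanity-checked in a few lines (the variable names are mine, taken from the quantities in the equation):

```python
stripes_per_line = 250
stripes_per_period = 2   # one black stripe + one white stripe form one period
samples_per_period = 2   # Nyquist: at least two samples per period

elements_per_line = stripes_per_line / stripes_per_period * samples_per_period
print(elements_per_line)  # 250.0
```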
The Gaussian lowpass filter function is shown below. The filter is defined by:
$ H(u,v) = e^{-D^2(u,v)/2D_0^2}$
A cutoff frequency of $D_0 = 3$ was selected so that only the largest 'a' remains legible.
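A minimal sketch of this transfer function and its application in the centered frequency domain, assuming a NumPy implementation (the function names are mine; $D_0 = 3$ follows the text):

```python
import numpy as np

def gaussian_lowpass(shape, d0):
    """Gaussian lowpass transfer function H(u,v) = exp(-D^2 / (2 D0^2))."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2  # squared distance from the center
    return np.exp(-D2 / (2 * d0 ** 2))

def apply_filter(img, H):
    """Multiply the centered spectrum by H and transform back to the image domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

H = gaussian_lowpass((64, 64), d0=3)
print(H[32, 32])  # 1.0 at the center, so the DC (mean) value passes unchanged
```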
The Butterworth filter is implemented below. The equation used is:
$H(u,v) = \frac {1} {1 + [D(u,v)/D_0]^{2n}}$
A cutoff frequency of $D_0 = 5$ and order $n = 2$ were selected to produce similar results to (a).
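A sketch of the Butterworth lowpass transfer function, assuming a NumPy implementation (the function name is mine; $D_0 = 5$ and $n = 2$ follow the text). At $D = D_0$ the response is exactly 0.5, which is a convenient correctness check:

```python
import numpy as np

def butterworth_lowpass(shape, d0, n):
    """Butterworth lowpass H(u,v) = 1 / (1 + [D(u,v)/D0]^(2n))."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the center
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

H = butterworth_lowpass((64, 64), d0=5, n=2)
print(H[32, 32], H[32, 37])  # 1.0 at DC; 0.5 at the cutoff distance D = 5
```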
At a threshold value of 130, only pixels from the top black square survive. This is because the cutoff corresponds to frequencies larger than those of the other features in the image, but smaller than that of the large black square.
This leaves the center pixels of the large black square less diluted by the blur than the other black pixels in the image.
The Gaussian lowpass filter from (a) was used. A cutoff frequency of $D_0 = 3$ was selected.
The Gaussian lowpass filter from Project 3(a) was used to generate a blurred image in the Fourier space, as asked by the textbook. The resulting mask (original minus blurred) was then scaled and added to the original image in the image space, with k = 1 for unsharp masking and k > 1 for high-boost filtering.
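This procedure can be sketched as follows, assuming a NumPy implementation with the Gaussian lowpass blur performed in the frequency domain (the function name and defaults are mine):

```python
import numpy as np

def unsharp_mask(img, d0=3, k=1.0):
    """Unsharp masking (k = 1) or high-boost filtering (k > 1)."""
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    # Gaussian lowpass H(u,v) = exp(-D^2 / (2 D0^2)) in the centered spectrum.
    H = np.exp(-(u[:, None] ** 2 + v[None, :] ** 2) / (2 * d0 ** 2))
    F = np.fft.fftshift(np.fft.fft2(img))
    blurred = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    # Add the scaled mask (original minus blurred) back to the original.
    return img + k * (img - blurred)
```

On a constant image the mask is zero, so the output equals the input for any k, which makes a simple sanity check.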
To recreate the results from Example 4.20, we implement a Butterworth highpass filter below:
Using $D_0 = 50$ and $n = 4$, as in the textbook, we then threshold at 80 and invert. There is a degree of splotchiness in this result, and slightly better results were obtained with order $n = 1$.
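A sketch of the Butterworth highpass transfer function under these parameters, assuming a NumPy implementation (the function name is mine). The algebraically equivalent form $D^{2n}/(D^{2n} + D_0^{2n})$ avoids dividing by $D = 0$ at the center:

```python
import numpy as np

def butterworth_highpass(shape, d0, n):
    """Butterworth highpass H(u,v) = 1 / (1 + [D0/D(u,v)]^(2n))."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    # Equivalent form: zero at the center (DC blocked), 0.5 at D = D0.
    return D ** (2 * n) / (D ** (2 * n) + d0 ** (2 * n))

H = butterworth_highpass((128, 128), d0=50, n=4)
print(H[64, 64], H[64, 114])  # 0.0 at DC; 0.5 at the cutoff distance D = 50
```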
The Laplacian sharpening filter is implemented below.
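A minimal sketch of Laplacian sharpening, assuming the standard 4-neighbor discrete Laplacian with periodic (wrap-around) boundaries via `np.roll` (the function name is mine):

```python
import numpy as np

def laplacian_sharpen(img):
    """Sharpen by subtracting the discrete Laplacian from the image."""
    # 4-neighbor Laplacian: sum of neighbors minus 4x the center pixel.
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 4 * img)
    # The Laplacian is negative at bright peaks, so subtracting it boosts edges.
    return img - lap
```

A constant image passes through unchanged, while an isolated bright pixel is amplified and ringed by negative neighbors, which is the expected sharpening behavior.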